Using Monte Carlo simulated PPG signals to train a deep learning model to predict hemoglobin levels
Volpe, Giovanni; Pereira, Joana B; Brunner, Daniel; Ozcan, Aydogan (Ed.)
Measuring hemoglobin (Hb) levels is required for the assessment of different health conditions, such as anemia, a condition in which there are not enough healthy red blood cells to carry adequate oxygen to the body's tissues. Measuring Hb levels requires the extraction of a blood sample, which is then sent to a laboratory for analysis. This invasive procedure complicates the continuous monitoring of Hb levels. Noninvasive techniques, including imaging and photoplethysmography (PPG) signals combined with machine learning, are being investigated for continuous Hb measurements. However, the real data available to train these algorithms is limited, which hinders the generalization and implementation of such techniques in healthcare settings. In this work, we present a computational model based on Monte Carlo simulations that can generate multispectral PPG signals covering a broad range of Hb levels. These signals are then used to train a deep learning (DL) model to estimate hemoglobin levels. Through this approach, the DL model learns valuable insights about the relationships between PPG signals, oxygen saturation, and Hb levels. The signals were generated by propagating a source through a volume containing the skin tissue properties and the target physiological parameters. The source consisted of plane waves at the 660 nm and 890 nm wavelengths. Hb values ranging from 6 g/dL to 18 g/dL were used to generate 468 PPGs to train a convolutional neural network (CNN). The initial results show high accuracy in detecting low Hb levels. To the best of our knowledge, the complexity of the biological interactions involved in measuring hemoglobin levels has yet to be fully modeled.
The presented model offers an alternative approach to studying the effects of changes in Hb levels on PPG signal morphology and its interaction with other physiological parameters present in the optical path of the measured signals.
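The abstract above does not include the simulation itself. As a rough illustration of how a two-wavelength PPG waveform can depend on Hb concentration, the sketch below attenuates a pulsatile optical path with the Beer-Lambert law. The extinction coefficients, pulse model, and function names are invented placeholders, not the paper's calibrated Monte Carlo tissue model.

```python
import numpy as np

# Illustrative (uncalibrated) extinction coefficients, 1/(cm * g/dL),
# given as (oxyhemoglobin, deoxyhemoglobin) pairs per wavelength.
EXT = {660: (0.15, 0.80), 890: (0.30, 0.20)}

def simulate_ppg(hb_g_dl, spo2=0.97, path_cm=1.0, n=100):
    """Toy two-wavelength PPG: Beer-Lambert attenuation of a sinusoidally
    pulsating optical path, a stand-in for full Monte Carlo photon transport."""
    t = np.linspace(0.0, 1.0, n)
    pulse = path_cm * (1.0 + 0.05 * np.sin(2 * np.pi * t))  # one cardiac cycle
    signals = {}
    for wl, (e_oxy, e_deoxy) in EXT.items():
        # Absorption grows with Hb concentration, weighted by saturation.
        mu_a = hb_g_dl * (spo2 * e_oxy + (1.0 - spo2) * e_deoxy)
        signals[wl] = np.exp(-mu_a * pulse)  # transmitted intensity
    return signals

sig = simulate_ppg(12.0)  # one simulated sample at a mid-range Hb level
```

Under this simplification, higher Hb yields stronger absorption and hence a lower transmitted intensity, which is the kind of relationship the CNN is trained to invert.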
-
Data fusion techniques have gained special interest in remote sensing due to the available capabilities to obtain measurements of the same scene using different instruments with varied resolution domains. In particular, multispectral (MS) and hyperspectral (HS) imaging fusion is used to generate high spatial and spectral resolution images (HSEI). Deep learning data fusion models based on Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN) have been developed to achieve this task. In this work, we present a Multi-Level Propagation Learning Network (MLPLN) based on an LSTM model that can be trained with variable data sizes in order to achieve the fusion process. Moreover, the MLPLN provides an intrinsic data augmentation feature that reduces the required number of training samples. The proposed model generates an HSEI by fusing a high-spatial-resolution MS image and a low-spatial-resolution HS image. The performance of the model is studied and compared to existing CNN and LSTM approaches by evaluating the quality of the fused image using the structural similarity metric (SSIM). The results show that an increase in the SSIM is obtained even while reducing the number of samples used to train the MLPLN model.
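The SSIM metric used above to score fused images can be sketched in its simplest global (single-window) form; published evaluations typically use the windowed variant, and the constants `k1` and `k2` below follow the common defaults.

```python
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM between two images with values in [0, L];
    a simplified form of the windowed SSIM used to compare a fused image
    against a reference."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

ref = np.linspace(0.0, 1.0, 64).reshape(8, 8)  # toy reference image
same = ssim_global(ref, ref)                   # identical images -> 1.0
worse = ssim_global(ref, ref[::-1].copy())     # row-flipped copy -> lower score
```

Identical images score exactly 1.0, and structural distortion (here, flipping the rows) lowers the score, which is the behavior the fusion comparison relies on.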
-
Messinger, David W.; Velez-Reyes, Miguel (Ed.)
Recently, multispectral and hyperspectral data fusion models based on deep learning have been proposed to generate images with high spatial and spectral resolution. The general objective is to obtain images that improve spatial resolution while preserving high spectral content. In this work, two deep learning data fusion techniques are characterized in terms of classification accuracy. These methods fuse a high-spatial-resolution multispectral image with a lower-spatial-resolution hyperspectral image to generate a high spatial-spectral hyperspectral image. The first model is based on a multi-scale long short-term memory (LSTM) network. The LSTM approach performs the fusion using a multiple-step process that transitions from low to high spatial resolution, using an intermediate step capable of reducing spatial information loss while preserving spectral content. The second fusion model is based on a convolutional neural network (CNN) data fusion approach. We present fused images for four multi-source datasets with different spatial and spectral resolutions. Both models provide fused images with increased spatial resolution from 8 m to 1 m. The fused images obtained with the two models are evaluated in terms of classification accuracy with several classifiers: minimum distance, support vector machines, class-dependent sparse representation, and CNN classification. The classification results show better performance, in both overall and average accuracy, for the images generated with the multi-scale LSTM fusion than for the CNN fusion.
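The low-to-intermediate-to-high transition described above can be pictured as repeated 2x spatial upsampling of the hyperspectral cube (8 m to 4 m to 2 m to 1 m). In the sketch below, nearest-neighbor replication stands in for the learned LSTM/CNN transitions; the function name and shapes are illustrative.

```python
import numpy as np

def stepwise_upsample(hs_cube, factor=8):
    """Upsample an (H, W, bands) hyperspectral cube by `factor` through
    successive 2x spatial steps, mirroring the multi-step transition from
    low to high resolution. Nearest-neighbor replication replaces the
    learned per-step fusion of the actual models."""
    x = hs_cube
    for _ in range(int(np.log2(factor))):
        x = x.repeat(2, axis=0).repeat(2, axis=1)  # double both spatial axes
    return x

hs_low = np.random.rand(4, 4, 10)    # toy hyperspectral patch on the 8 m grid
hs_high = stepwise_upsample(hs_low)  # same bands on the 1 m grid
```

Decomposing one 8x jump into three 2x steps is what lets an intermediate-resolution stage sit between the input and output, which is where the multi-scale LSTM inserts its learned corrections.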
-
Messinger, David W.; Velez-Reyes, Miguel (Ed.)
Recent advances in data fusion provide the capability to obtain enhanced hyperspectral data with high spatial and spectral information content, thus allowing for improved classification accuracy. Although hyperspectral image classification is a highly investigated topic in remote sensing, each classification technique presents different advantages and disadvantages. For example, methods based on morphological filtering are particularly good at classifying human-made structures with basic geometrical spatial shapes, like houses and buildings. On the other hand, methods based on spectral information tend to perform better classification in natural scenery with more shape diversity, such as vegetation and soil areas. Moreover, classes with mixed pixels, small training data, or objects with similar reflectance values present a higher challenge for obtaining high classification accuracy. Therefore, it is difficult to find a single technique that provides the highest classification accuracy for every class present in an image. This work proposes a decision fusion approach aiming to increase the classification accuracy of enhanced hyperspectral images by integrating the results of multiple classifiers. Our approach is performed in two steps: 1) machine learning algorithms such as Support Vector Machines (SVM), Deep Neural Networks (DNN), and class-dependent sparse representation generate initial classification data; then 2) a decision fusion scheme based on a Convolutional Neural Network (CNN) integrates all the classification results into a unified classification rule. In particular, the CNN receives as input the different pixel-wise class probabilities from each implemented classifier and, using a softmax activation function, estimates the final decision. We present results showing the performance of our method on different hyperspectral image datasets.
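The two-step scheme above ends with a CNN that maps stacked per-classifier probabilities to a final softmax decision. As a minimal stand-in, the sketch below fuses the probability maps with a fixed weighted log-linear pool followed by a softmax; in the paper this fixed rule is replaced by the learned CNN, and all names and shapes here are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_decisions(prob_maps, weights=None):
    """Fuse per-classifier class-probability maps, shaped
    (n_classifiers, H, W, n_classes), into one label map via a weighted
    log-linear pool plus softmax (a fixed-rule stand-in for the CNN)."""
    p = np.asarray(prob_maps, dtype=float)
    if weights is None:
        weights = np.full(p.shape[0], 1.0 / p.shape[0])  # equal trust
    fused = softmax(np.tensordot(weights, np.log(p + 1e-12), axes=1))
    return fused.argmax(axis=-1), fused

# Toy 1x1 "image", two classes, three classifiers (e.g. SVM, DNN, sparse).
maps = [[[[0.9, 0.1]]], [[[0.7, 0.3]]], [[[0.4, 0.6]]]]
labels, fused = fuse_decisions(maps)  # class 0 wins: two of three agree strongly
```

A learned CNN generalizes this by letting the pooling weights vary per class and per spatial neighborhood instead of being fixed and uniform.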
-
Pixel-level fusion of satellite images coming from multiple sensors allows for an improvement in the quality of the acquired data both spatially and spectrally. In particular, multispectral and hyperspectral images have been fused to generate images with high spatial and spectral resolution. Several approaches for this task exist in the literature; nonetheless, those techniques still suffer a loss of relevant spatial information during the fusion process. This work presents a multi-scale deep learning model to fuse multispectral and hyperspectral data with high-spatial-and-low-spectral resolution (HSaLS) and low-spatial-and-high-spectral resolution (LSaHS), respectively. As a result of the fusion scheme, a high-spatial-and-spectral-resolution image (HSaHS) is obtained. To accomplish this, we developed a new scalable high-spatial-resolution process in which the model learns how to transition from low spatial resolution to an intermediate spatial resolution level, and finally to the high spatial-spectral resolution image. This step-by-step process significantly reduces the loss of spatial information. The results of our approach show better performance in terms of both the structural similarity index and the signal-to-noise ratio.
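Of the two quality measures cited above, the signal-to-noise side is commonly reported as peak signal-to-noise ratio (PSNR). A minimal version for images scaled to [0, 1] is sketched below; the variable names are illustrative.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    fused result, both with values in [0, peak]. Higher is better."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
noisy = np.full((4, 4), 0.1)  # constant error of 0.1 -> MSE of 0.01
score = psnr(ref, noisy)      # 10 * log10(1 / 0.01) = 20 dB
```

Because PSNR depends only on the mean squared error, it complements SSIM, which is sensitive to structural distortion rather than raw pixel error.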